

Search for: All records

Creators/Authors contains: "Vashistha, Aditya"


  1. Large language models (LLMs) are being increasingly integrated into everyday products and services, such as coding tools and writing assistants. As these embedded AI applications are deployed globally, there is a growing concern that the AI models underlying these applications prioritize Western values. This paper investigates what happens when a Western-centric AI model provides writing suggestions to users from a different cultural background. We conducted a cross-cultural controlled experiment with 118 participants from India and the United States who completed culturally grounded writing tasks with and without AI suggestions. Our analysis reveals that AI provided greater efficiency gains for Americans compared to Indians. Moreover, AI suggestions led Indian participants to adopt Western writing styles, altering not just what is written but also how it is written. These findings show that Western-centric AI models homogenize writing toward Western norms, diminishing nuances that differentiate cultural expression. 
  2. AI-driven tools are increasingly deployed to support low-skilled community health workers (CHWs) in hard-to-reach communities in the Global South. This paper examines how CHWs in rural India engage with and perceive AI explanations, and how we might design explainable AI (XAI) interfaces that are more understandable to them. We conducted semi-structured interviews with CHWs who interacted with a design probe that predicts neonatal jaundice and accompanies its AI recommendations with explanations. We (1) identify how CHWs interpreted AI predictions and the associated explanations, (2) unpack the benefits and pitfalls they perceived in the explanations, and (3) detail how different design elements of the explanations affected their understanding of the AI. Our findings demonstrate that while CHWs struggled to understand the AI explanations, they nevertheless expressed a strong preference for integrating explanations into AI-driven tools and perceived several benefits, such as helping CHWs learn new skills and improving patient trust in AI tools and in CHWs. We conclude by discussing which elements of AI need to be made explainable to novice AI users like CHWs and outline concrete design recommendations to improve the utility of XAI for novice AI users in non-Western contexts.
  3. Home care workers (HCWs) are professionals who provide care to older adults and people with disabilities at home. However, HCWs are vulnerable and especially susceptible to wage theft, or not being paid their legally entitled wages in full by their employers. Prior work on other low-wage work settings has shown how the way technology is designed and deployed can both cause and address wage theft. We extend this work by examining the relationship between technology and wage theft in the home care context. We collaborated closely with a local grassroots organization to conduct interviews with workers and with labor, legal, and payroll experts. We uncovered how the complex, volatile, and diverse nature of home care exacerbates errors in time-tracking systems. Through design provocations and focus groups with workers and experts, we also investigated the potential of technology as part of broader efforts to curb wage theft by educating and empowering isolated HCWs. While we found that approachable design could reduce errors in existing systems, make employer processes more transparent, and help workers exchange knowledge to build collective power, we also discuss concerns around burden, privacy, and accountability when designing technologies for HCWs and other low-wage workers.
  4. Home health aides are paid professionals who provide long-term care to an expanding population of adults who need it. However, aides' work is often unrecognized by the broader caregiving team despite being in demand and crucial to care, an invisibility reinforced by ill-suited technological tools. To understand the invisible work aides perform and its relationship to technology design, we interviewed 13 aides employed by home care agencies in New York City. These aides shared examples that demonstrated the intertwined nature of both types of invisible work (i.e., emotions- and systems-based) and expanded the sociological mechanisms of invisibility (i.e., sociocultural, sociolegal, sociospatial) to include the sociotechnical. Through these findings, we investigate the opportunities, tensions, and challenges that could inform the design of tools created for these important, but often overlooked, frontline caregivers.